Optimization of swarm intelligence algorithms is a major approach to improving their performance. As swarm intelligence algorithms are increasingly applied to model optimization, production scheduling, path planning and other problems, the performance demands on these algorithms keep rising. As an important means of optimizing swarm intelligence algorithms, subgroup strategies can flexibly balance global exploration ability and local exploitation ability, and have become one of the research hotspots of swarm intelligence algorithms. In order to promote the development and application of subgroup strategies, dynamic subgroup strategies, subgroup strategies based on the master-slave paradigm, and subgroup strategies based on network structures were investigated in detail. The structural characteristics, improvement methods and application scenarios of the various subgroup strategies were expounded. Finally, the current problems as well as the future research trends and development directions of subgroup strategies were summarized.
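The exploration/exploitation balance that subgroup strategies provide can be illustrated with a minimal multi-swarm Particle Swarm Optimization sketch: each subgroup tracks only its own best solution, so information flow between subgroups is restricted. This is a generic illustration, not any specific published variant; the objective, subgroup assignment and all hyperparameters are illustrative choices.

```python
import numpy as np

def subgroup_pso(f, dim=2, n_particles=20, n_subgroups=4, iters=200, seed=0):
    """Minimal PSO with a static subgroup (multi-swarm) topology: each
    subgroup keeps its own best, limiting information exchange between
    subgroups to balance global exploration and local exploitation.
    Hyperparameters (w, c1, c2) are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    groups = np.arange(n_particles) % n_subgroups  # round-robin assignment
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        # each particle is attracted to its own best and its subgroup's best
        gbest = np.empty_like(x)
        for g in range(n_subgroups):
            m = groups == g
            gbest[m] = pbest[m][np.argmin(pbest_f[m])]
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
    return pbest[np.argmin(pbest_f)], pbest_f.min()

# toy run on the sphere function
best_x, best_f = subgroup_pso(lambda z: float(np.sum(z ** 2)))
```

A dynamic subgroup strategy would additionally regroup particles during the run (e.g. reshuffling `groups` every few iterations), trading some exploitation for renewed exploration.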
Since the inspection of water conservancy dams currently relies mainly on manual on-site inspection, which has high operating costs and low efficiency, an improved detection algorithm based on YOLOv5 was proposed. Firstly, a modified multi-scale vision Transformer structure was used to improve the backbone: the multi-scale global information associated by the multi-scale Transformer structure and the local information extracted by a Convolutional Neural Network (CNN) were used to construct aggregated features, thereby making full use of multi-scale semantic information and location information to improve the feature extraction capability of the network. Then, a coordinate attention mechanism was added in front of each feature detection layer of the network to encode features along the height and width directions of the image; long-distance associations among pixels on the feature map were constructed from the encoded features to enhance the target localization ability of the network in complex environments. Next, the sampling algorithm for positive and negative training samples was improved: by constructing the average fit and the difference between the prior boxes and the ground-truth boxes, candidate positive samples were helped to respond to prior boxes of similar shape to themselves, so that the network converged faster and better, improving the overall performance and generalization of the network. Finally, the network structure was lightened for application requirements and optimized through network pruning and structural re-parameterization.
Experimental results show that on the adopted dam disease data, compared with the original YOLOv5s algorithm, the improved network has mAP (mean Average Precision)@0.5 improved by 10.5 percentage points and mAP@0.5:0.95 improved by 17.3 percentage points; compared with the network before lightening, the lightweight network has the number of parameters and the FLOPs (FLoating point OPerations) reduced by 24% and 13% respectively, and the detection speed improved by 42%, verifying that the network meets the precision and speed requirements of disease detection in the current application scenarios.
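The coordinate attention idea used above, pooling a feature map separately along its height and width so that each channel obtains one descriptor per row and one per column, can be sketched in a few lines of numpy. This is a simplified illustration: the published module's 1x1 convolutions and channel reduction are replaced here by per-channel weights `w_h` and `w_w`, which are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate attention on a (C, H, W) feature map:
    average-pool along each spatial axis, gate every (channel, row) and
    (channel, column) with a sigmoid, and re-weight the map so it encodes
    long-range dependencies in both directions."""
    pool_h = x.mean(axis=2)                 # (C, H): average over width
    pool_w = x.mean(axis=1)                 # (C, W): average over height
    a_h = sigmoid(w_h[:, None] * pool_h)    # attention per (channel, row)
    a_w = sigmoid(w_w[:, None] * pool_w)    # attention per (channel, column)
    return x * a_h[:, :, None] * a_w[:, None, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
y = coordinate_attention(x, np.ones(8), np.ones(8))
```

Because both attention maps lie in (0, 1), the module can only suppress, never amplify, individual activations; placing it directly before each detection layer lets the detector emphasize spatially consistent rows and columns of the feature map.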
To address the problems that it is difficult for traditional clustering algorithms to measure sample similarity when filling missing samples and that the quality of the filled data is poor, a missing value attention clustering algorithm based on Latent Factor Model (LFM) in subspace was proposed. Firstly, LFM was used to map the original data space to a low-dimensional subspace to reduce sample sparsity. Then, an attention weight graph between different features was constructed by decomposing the feature matrix obtained from the original space, and the similarity calculation method between subspace samples was optimized, making the calculation of sample similarity more accurate and more general. Finally, to reduce the high time complexity of sample similarity calculation, a multi-pointer attention weight graph was designed for optimization. The algorithm was tested on four datasets with different proportions of randomly missing values. On the Hand-digits dataset, compared with the KISC (K-nearest neighbors Interpolation Subspace Clustering) algorithm for high-dimensional data with missing features: with 10% of the data missing, the Accuracy (ACC) of the proposed algorithm was improved by 2.33 percentage points and the Normalized Mutual Information (NMI) by 2.77 percentage points; with 20% missing, ACC was improved by 0.39 percentage points and NMI by 1.33 percentage points, verifying the effectiveness of the proposed algorithm.
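The first step, mapping the data into a low-dimensional subspace with a Latent Factor Model, can be sketched as plain SGD matrix factorization over only the observed entries. This is a generic LFM illustration under assumed hyperparameters, not the paper's algorithm; the attention weight graph and similarity optimization are omitted.

```python
import numpy as np

def lfm_subspace(X, mask, k=2, lr=0.01, reg=0.01, epochs=500, seed=0):
    """Latent Factor Model sketch: observed entries of X (mask == True)
    are approximated by U @ V.T, placing each sample in a k-dimensional
    subspace (rows of U) where similarity can be computed despite missing
    values. Plain SGD on the regularized squared error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.standard_normal((n, k)) * 0.1
    V = rng.standard_normal((m, k)) * 0.1
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = X[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

# rank-2 toy data with roughly 20% of entries hidden at random
rng = np.random.default_rng(1)
true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 10))
mask = rng.random(true.shape) > 0.2
U, V = lfm_subspace(true, mask)
filled = U @ V.T                      # reconstruction fills the gaps
rmse = float(np.sqrt(np.mean((filled[~mask] - true[~mask]) ** 2)))
```

Sample similarity would then be computed between rows of `U` rather than rows of the sparse original matrix, which is what reduces the sparsity problem the abstract describes.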
Genotype imputation can compensate for data missing due to technical limitations by estimating the sample regions not covered in gene sequencing data, but existing deep learning-based imputation methods cannot effectively capture the linkage among loci across the complete sequence, resulting in low overall imputation accuracy and high dispersion of imputation accuracy across batches of sequences. Therefore, FCSA (Fusing Convolution and Self-Attention), an imputation method that fuses convolution and the self-attention mechanism, was proposed to address these problems, and two fusion modules were used to form the encoder and decoder of the network model. In the encoder fusion module, a self-attention layer was used to obtain the correlations among loci of the complete sequence, and after fusing these correlations into the global loci, local features were extracted through a convolutional layer. In the decoder fusion module, the local features of the encoded low-dimensional vector were reconstructed by convolution, and the complete sequence was modeled and fused by a self-attention layer. Genetic data of multiple animal species were used for model training, and comparison and validation were carried out on Dog, Pig and Chicken datasets. The results show that compared with SCDA (Sparse Convolutional Denoising Autoencoders), AGIC (Autoencoder Genome Imputation and Compression) and U-net, FCSA achieves the highest average imputation accuracy at 10%, 20% and 30% missing rates. Ablation experimental results also show that the design of the two fusion modules effectively improves the accuracy of genotype imputation.
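The encoder fusion module's pattern, self-attention over all loci followed by a convolution for local features, can be sketched as follows. This is a schematic illustration of the fusion order only: `w_qkv` (three d×d matrices) and the 1D convolution `kernel` are hypothetical stand-ins for learned weights, and the real model operates on encoded genotype tensors rather than raw arrays.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fusion_block(x, w_qkv, kernel):
    """One encoder-style fusion block: self-attention first relates every
    locus to the complete sequence (capturing linkage), then a 1D
    convolution extracts local features from the fused result.
    x is (L, d) for L loci with d-dimensional embeddings."""
    wq, wk, wv = w_qkv
    q, k_, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k_.T / np.sqrt(x.shape[1]))   # (L, L) locus linkage
    fused = attn @ v                                 # global context per locus
    pad = len(kernel) // 2                           # 'same'-length output
    padded = np.pad(fused, ((pad, pad), (0, 0)))
    return np.stack([np.convolve(padded[:, j], kernel, mode="valid")
                     for j in range(fused.shape[1])], axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((12, 4))                     # 12 loci, 4-dim embedding
w = tuple(rng.standard_normal((4, 4)) for _ in range(3))
out = fusion_block(x, w, np.array([0.25, 0.5, 0.25]))
```

The decoder fusion module reverses this order (convolution first, then self-attention), which is the asymmetry the ablation experiments evaluate.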
With the spread of Artificial Intelligence (AI) computing power to the network edge and even to terminal devices, a computing power network with end-edge-supercloud collaboration has become a promising computing solution. The emerging opportunities have spawned the deep integration of end-edge-supercloud computing with the network. However, the development of such an integrated system remains incomplete in terms of adaptability, flexibility and value realization. Therefore, a blockchain-assisted computing power network for ubiquitous AI, named ACPN, was proposed. In ACPN, end-edge-supercloud collaboration provides the infrastructure for the framework; the computing power resource pool formed by this infrastructure provides safe and reliable computing power for users; the network satisfies user demands by scheduling resources; and the neural network and execution platform in the framework provide interfaces for AI task execution. At the same time, the blockchain guarantees the reliability of resource transactions and encourages more computing power contributors to join the platform. This framework provides adaptability for users of the computing power network, flexibility for resource scheduling, and value realization for computing power providers. This new computing power network architecture was clearly illustrated through a case study.
Resource load prediction with high accuracy can provide a basis for real-time task scheduling and thus reduce energy consumption. However, most prediction models for resource load time series make short-term or long-term predictions by extracting only the long-term dependence characteristics of the series while neglecting the short-term ones. To make better long-term predictions of resource load, a new edge computing resource load prediction model based on the fusion of long- and short-term time series features was proposed. Firstly, the Gramian Angular Field (GAF) was used to transform the time series into image-format data, so that features could be extracted by a Convolutional Neural Network (CNN). Then, the CNN was used to extract spatial features and short-term data features, while a Long Short-Term Memory (LSTM) network was used to extract the long-term dependence features of the time series. Finally, the extracted long- and short-term dependence features were fused through dual channels to realize long-term resource load prediction. Experimental results show that the Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and R-squared (R2) of the proposed model for CPU resource load prediction on the Alibaba cloud cluster trace dataset are 3.823, 5.274 and 0.8158 respectively. Compared with single-channel CNN and LSTM models, dual-channel CNN+LSTM and ConvLSTM+LSTM models, and resource load prediction models such as LSTM Encoder-Decoder (LSTM-ED) and XGBoost, the proposed model provides higher prediction accuracy.
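The GAF transform in the first step is a standard construction: the series is rescaled to [-1, 1], each value is mapped to an angle φ = arccos(x), and the (i, j) image entry is cos(φ_i + φ_j) (the summation variant, GASF). A minimal sketch:

```python
import numpy as np

def gramian_angular_field(series):
    """Gramian Angular Summation Field: rescale to [-1, 1], map each value
    to an angle phi = arccos(x), and form the matrix cos(phi_i + phi_j).
    This turns a 1D load trace into a 2D 'image' that a CNN can consume."""
    s = np.asarray(series, dtype=float)
    lo, hi = s.min(), s.max()
    x = 2 * (s - lo) / (hi - lo) - 1 if hi > lo else np.zeros_like(s)
    phi = np.arccos(np.clip(x, -1, 1))   # clip guards rounding error
    return np.cos(phi[:, None] + phi[None, :])

g = gramian_angular_field([0.1, 0.5, 0.9, 0.3])
```

The resulting matrix is symmetric with entries in [-1, 1], and its diagonal cos(2φ_i) preserves the original values up to the angular encoding, which is why temporal correlations appear as spatial texture for the CNN channel.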
Focusing on the problems of the large time consumption of manual detection and the insufficient precision of current detection methods for elongated pavement distress, a two-stage elongated pavement distress detection method named Epd RCNN (Elongated pavement distress Region-based Convolutional Neural Network), which can accurately locate and classify the distress, was proposed according to the weak semantic characteristics and abnormal geometric properties of the distress. Firstly, for the weak semantic characteristics of elongated pavement distress, a backbone network that reuses low-level features and repeatedly fuses the features of different stages was proposed. Secondly, during training, high-quality positive samples for network training were generated by an anchor box mechanism conforming to the geometric property distribution of the distress. Then, the distress bounding boxes were predicted on a single high-resolution feature map, and a parallel cascaded dilated convolution module was applied to this feature map to improve its multi-scale feature representation ability. Finally, for region proposals of different shapes, region proposal features conforming to the distress geometric properties were extracted by a proposal feature improvement module composed of deformable Region of Interest Pooling (RoI Pooling) and a spatial attention module. Experimental results show that the proposed method achieves a mean Average Precision (mAP) of 0.907 on images with sufficient illumination, an mAP of 0.891 on images with illumination problems, and a comprehensive mAP of 0.899, indicating that the proposed method has good detection performance and robustness to illumination.
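The parallel cascaded dilated convolution idea, branches whose dilation rates grow so that each sees a larger receptive field, can be sketched in 1D as follows. This is an illustrative reading of the module, not the paper's exact configuration: the dilation rates (1, 2, 4), the shared kernel, and summation as the fusion operation are all assumed for the sketch.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1D dilated convolution with zero padding ('same'-length output)."""
    k = len(kernel)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros(len(x))
    for t in range(len(x)):
        for i in range(k):
            out[t] += kernel[i] * xp[t + i * dilation]
    return out

def parallel_cascaded_dilated(x, kernel, rates=(1, 2, 4)):
    """Each branch is cascaded on the previous one with a larger dilation
    rate, and all branch outputs are summed, so the fused feature mixes
    several receptive-field scales without losing resolution."""
    branch, fused = x, np.zeros(len(x))
    for r in rates:
        branch = dilated_conv1d(branch, kernel, r)
        fused = fused + branch
    return fused

# smoothing kernel on a constant signal: interior stays flat per branch
y = parallel_cascaded_dilated(np.ones(16), np.array([0.25, 0.5, 0.25]))
```

Cascading the dilations multiplies the effective receptive field while the parallel sum keeps the small-scale branches available, which is what helps thin, elongated distress that spans many pixels in one direction but few in the other.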
To address the difficulty of judging the distortion types of multiply distorted images, a new multiple-distortion type judgment method based on a multi-scale and multi-classifier Convolutional Neural Network (CNN) was proposed, following the idea of deep learning multi-label classification. Firstly, an image block containing high-frequency information was extracted from the image and input into convolutional layers with different receptive fields to extract the shallow feature maps of the image. Then, the shallow feature maps were input into the structure of each sub-classifier for deep feature extraction and fusion, and the fused features were judged by a Sigmoid classifier. Finally, the judgment results of the different sub-classifiers were fused to obtain the distortion types of the image. Experimental results show that, on the Natural Scene Mixed Disordered Images Database (NSMDID), the average judgment accuracy of the proposed method reaches 91.4% for different types of multiple distortion in the images, with most accuracies above 96.8%, illustrating that the proposed method can effectively judge the distortion types in multiply distorted images.
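The final fusion step can be sketched as a multi-label decision: each sub-classifier emits an independent Sigmoid score per distortion type (several types may co-occur), and the scores are combined across sub-classifiers and thresholded. Averaging and the 0.5 threshold here are illustrative choices, not necessarily the paper's fusion rule.

```python
import numpy as np

def fuse_multilabel(sub_scores, threshold=0.5):
    """Fuse per-type Sigmoid scores from several sub-classifiers by
    averaging, then threshold each type independently (multi-label:
    an image may carry several distortion types at once)."""
    scores = np.mean(np.asarray(sub_scores, dtype=float), axis=0)
    return scores, scores >= threshold

# hypothetical scores from 3 sub-classifiers over 4 distortion types
sub = [[0.9, 0.2, 0.7, 0.1],
       [0.8, 0.3, 0.6, 0.2],
       [0.7, 0.1, 0.8, 0.1]]
scores, labels = fuse_multilabel(sub)   # types 0 and 2 are predicted present
```

Thresholding each type independently, rather than taking an argmax, is what distinguishes this multi-label setting from ordinary single-distortion classification.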